Baba is LLM: Reasoning in a Game with Dynamic Rules

van Wetten, Fien, Plaat, Aske, van Duijn, Max

arXiv.org Artificial Intelligence

Large language models (LLMs) are known to perform well on language tasks, but struggle with reasoning tasks. This paper explores the ability of LLMs to play the 2D puzzle game Baba is You, in which players manipulate rules by rearranging text blocks that define object properties. Given that this rule-manipulation relies on language abilities and reasoning, it is a compelling challenge for LLMs. Six LLMs are evaluated using different prompt types, including (1) simple, (2) rule-extended and (3) action-extended prompts. In addition, two models (Mistral, OLMo) are finetuned using textual and structural data from the game. Results show that while larger models (particularly GPT-4o) perform better in reasoning and puzzle solving, smaller unadapted models struggle to recognize game mechanics or apply rule changes. Finetuning improves the ability to analyze the game levels, but does not significantly improve solution formulation. We conclude that even for state-of-the-art and finetuned LLMs, reasoning about dynamic rule changes is difficult (specifically, understanding the use-mention distinction). The results provide insights into the applicability of LLMs to complex problem-solving tasks and highlight the suitability of games with dynamically changing rules for testing reasoning and reflection by LLMs.


Cardiverse: Harnessing LLMs for Novel Card Game Prototyping

Li, Danrui, Zhang, Sen, Sohn, Sam S., Hu, Kaidong, Usman, Muhammad, Kapadia, Mubbasir

arXiv.org Artificial Intelligence

The prototyping of computer games, particularly card games, requires extensive human effort in creative ideation and gameplay evaluation. Recent advances in Large Language Models (LLMs) offer opportunities to automate and streamline these processes. However, it remains challenging for LLMs to design novel game mechanics beyond existing databases, generate consistent gameplay environments, and develop scalable gameplay AI for large-scale evaluations. This paper addresses these challenges by introducing a comprehensive automated card game prototyping framework. The approach highlights a graph-based indexing method for generating novel game designs, an LLM-driven system for consistent game code generation validated by gameplay records, and a gameplay AI constructing method that uses an ensemble of LLM-generated action-value functions optimized through self-play. These contributions aim to accelerate card game prototyping, reduce human labor, and lower barriers to entry for game developers.
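The ensemble-of-action-value-functions idea mentioned in this abstract can be sketched as follows. This is an illustrative toy, not Cardiverse's actual API: the heuristics and the state/action fields are invented for the example, and in the paper the value functions are LLM-generated and refined through self-play rather than hand-written.

```python
# Toy sketch (assumed names, not Cardiverse's code): an ensemble of
# action-value functions, with actions chosen by their mean score
# across the ensemble.

def q_aggressive(state, action):
    # toy value function: favor high-damage plays
    return action["damage"]

def q_defensive(state, action):
    # toy value function: penalize costly plays
    return -action["self_cost"]

ENSEMBLE = [q_aggressive, q_defensive]

def choose_action(state, actions):
    """Pick the action with the highest mean value across the ensemble."""
    def mean_value(a):
        return sum(q(state, a) for q in ENSEMBLE) / len(ENSEMBLE)
    return max(actions, key=mean_value)
```

Averaging over several independently generated value functions hedges against any single flawed heuristic, which is presumably why the paper optimizes the ensemble through self-play rather than trusting one generated function.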


Unbounded: A Generative Infinite Game of Character Life Simulation

Li, Jialu, Li, Yuanzhen, Wadhwa, Neal, Pritch, Yael, Jacobs, David E., Rubinstein, Michael, Bansal, Mohit, Ruiz, Nataniel

arXiv.org Artificial Intelligence

We introduce the concept of a generative infinite game, a video game that transcends the traditional boundaries of finite, hard-coded systems by using generative models. Inspired by James P. Carse's distinction between finite and infinite games, we leverage recent advances in generative AI to create Unbounded: a game of character life simulation that is fully encapsulated in generative models. Specifically, Unbounded draws inspiration from sandbox life simulations and allows you to interact with your autonomous virtual character in a virtual world by feeding, playing with and guiding it - with open-ended mechanics generated by an LLM, some of which can be emergent. In order to develop Unbounded, we propose technical innovations in both the LLM and visual generation domains. Specifically, we present: (1) a specialized, distilled large language model (LLM) that dynamically generates game mechanics, narratives, and character interactions in real-time, and (2) a new dynamic regional image prompt Adapter (IP-Adapter) for vision models that ensures consistent yet flexible visual generation of a character across multiple environments. We evaluate our system through both qualitative and quantitative analysis, showing significant improvements in character life simulation, user instruction following, narrative coherence, and visual consistency for both characters and the environments compared to traditional related approaches.


Mechanic Maker: Accessible Game Development Via Symbolic Learning Program Synthesis

Sumner, Megan, Saini, Vardan, Guzdial, Matthew

arXiv.org Artificial Intelligence

Game development is a highly technical practice that traditionally requires programming skills. This serves as a barrier to entry for would-be developers or those hoping to use games as part of their creative expression. While there have been prior game development tools focused on accessibility, they generally still require programming, or have major limitations in terms of the kinds of games they can make. In this paper we introduce Mechanic Maker, a tool for creating a wide-range of game mechanics without programming. It instead relies on a backend symbolic learning system to synthesize game mechanics from examples. We conducted a user study to evaluate the benefits of the tool for participants with a variety of programming and game development experience. Our results demonstrated that participants' ability to use the tool was unrelated to programming ability. We conclude that tools like ours could help democratize game development, making the practice accessible regardless of programming skills.


Designing Mixed-Initiative Video Games

Yang, Daijin

arXiv.org Artificial Intelligence

The development of Artificial Intelligence (AI) enables humans to co-create content with machines. The unexpectedness of AI-generated content can bring inspiration and entertainment to users. However, co-creation interactions are typically designed for content creators and have poor accessibility. To explore gamification of mixed-initiative co-creation and make human-AI interactions accessible and fun for players, I prototyped Snake Story, a mixed-initiative game where players can select AI-generated texts to write a story about a snake by playing a "Snake"-like game. A controlled experiment was conducted to investigate the dynamics of player-AI interactions with and without the game component in the designed interface. In a study with 11 players (n=11), I found that players used different strategies with the two versions; that the game mechanics significantly affected the output stories, the players' creative process, and their role perceptions; and that players with different backgrounds preferred different versions. Based on these results, I further discuss considerations for mixed-initiative game design. This work aims to inspire the design of engaging co-creation experiences.


Estimating player completion rate in mobile puzzle games using reinforcement learning

Kristensen, Jeppe Theiss, Valdivia, Arturo, Burelli, Paolo

arXiv.org Artificial Intelligence

In this work we investigate whether it is plausible to use the performance of a reinforcement learning (RL) agent to estimate the difficulty, measured as the player completion rate, of different levels in the mobile puzzle game Lily's Garden. For this purpose we train an RL agent and measure the number of moves required to complete a level. This is then compared to the level completion rate of a large sample of real players. We find that the strongest predictor of player completion rate for a level is the number of moves taken in the best ~5% of the agent's runs on that level. A very interesting observation is that, while in absolute terms the agent is unable to reach human-level performance across all levels, the differences in behaviour between levels are highly correlated with the differences in human behaviour. Thus, despite performing sub-par, it is still possible to use the agent's performance to estimate, and perhaps further model, player metrics.
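The best-~5%-of-runs statistic described above can be sketched in a few lines. This is a minimal illustration under assumed names, not the paper's code: it takes a list of per-run move counts for one level and returns the mean over the best (fewest-moves) ~5% of runs, which the paper finds to be the strongest per-level predictor of human completion rate.

```python
# Minimal sketch (assumed names): the difficulty proxy from the abstract,
# i.e. the mean move count over the best ~5% of agent runs on a level.

def best_5pct_moves(run_moves):
    """Mean moves over the best (fewest-moves) ~5% of runs; uses at least one run."""
    ordered = sorted(run_moves)          # fewest moves first
    k = max(1, len(ordered) * 5 // 100)  # size of the best-5% slice
    return sum(ordered[:k]) / k
```

Computed per level and compared across levels, this statistic can then be correlated with observed player completion rates to calibrate a difficulty model.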


On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning

Xu, Yifan, Hansen, Nicklas, Wang, Zirui, Chan, Yung-Chieh, Su, Hao, Tu, Zhuowen

arXiv.org Artificial Intelligence

Reinforcement Learning (RL) algorithms can solve challenging control problems directly from image observations, but they often require millions of environment interactions to do so. Recently, model-based RL algorithms have greatly improved sample-efficiency by concurrently learning an internal model of the world, and supplementing real environment interactions with imagined rollouts for policy improvement. However, learning an effective model of the world from scratch is challenging, and in stark contrast to humans that rely heavily on world understanding and visual cues for learning new skills. In this work, we investigate whether internal models learned by modern model-based RL algorithms can be leveraged to solve new, distinctly different tasks faster. We propose Model-Based Cross-Task Transfer (XTRA), a framework for sample-efficient online RL with scalable pretraining and finetuning of learned world models. By offline multi-task pretraining and online cross-task finetuning, we achieve substantial improvements over a baseline trained from scratch; we improve mean performance of model-based algorithm EfficientZero by 23%, and by as much as 71% in some instances. Most recently, EfficientZero (Ye et al., 2021), a model-based RL algorithm, has demonstrated impressive sample-efficiency, surpassing human-level performance with as little as 2 hours of real-time game play in select Atari 2600 games from the Arcade Learning Environment (ALE; Bellemare et al., 2013). This achievement is attributed, in part, to the algorithm concurrently learning an internal model of the environment from interaction, and using the learned model to imagine (simulate) further interactions for planning and policy improvement, thus reducing reliance on real environment interactions for skill acquisition.
[Figure 1: Model-Based Cross-Task Transfer (XTRA): a sample-efficient online RL framework with scalable pretraining and finetuning of learned world models using auxiliary data from offline tasks.]
Conversely, humans rely heavily on prior knowledge and visual cues when learning new skills - a study found that human players easily identify visual cues about game mechanics when exposed to a new game, and that human performance is severely degraded if such cues are removed or conflict with prior experiences (Dubey et al., 2018). This pretraining paradigm has recently been extended to visuo-motor control in various forms, e.g., by leveraging frozen (no finetuning) pretrained representations (Xiao et al., 2022; Parisi et al., 2022) or by finetuning in a supervised setting (Reed et al., 2022; Lee et al., 2022).
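The two-stage recipe described above can be outlined schematically. This is a sketch under stated assumptions: `WorldModel.update` stands in for one optimization step on EfficientZero-style losses, and the function and parameter names are invented for the example, not XTRA's actual implementation.

```python
# Schematic sketch (assumed names) of the XTRA recipe: offline multi-task
# pretraining of a world model, then online finetuning on the target task.

class WorldModel:
    def __init__(self):
        self.updates = 0

    def update(self, batch):
        self.updates += 1  # placeholder for one gradient/optimization step

def pretrain_offline(model, task_datasets, steps_per_task=2):
    # Stage 1: cycle over offline datasets from multiple source tasks.
    for data in task_datasets:
        for _ in range(steps_per_task):
            model.update(data)
    return model

def finetune_online(model, target_batches):
    # Stage 2: continue training on batches collected from the target task.
    for batch in target_batches:
        model.update(batch)
    return model
```

The point of the split is that stage 1 amortizes world-model learning across cheap offline data, so stage 2 needs far fewer real target-task interactions than training from scratch.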


What's in Store AI-Driven e-Gaming - Coruzant Technologies

#artificialintelligence

As technology advances at an exponential rate, the gaming industry has been one of the early adopters of artificial intelligence (AI) to enhance user experiences. With the global AI market expected to reach $190 billion by 2025, and the global video game market expected to reach over $268 billion by 2025, the combination of the two has the potential to revolutionize e-gaming. With the ability to create more immersive and personalized gaming experiences, AI has already begun to positively affect the gaming sector by providing gamers with smarter and more realistic opponents, advanced game mechanics, and an immersive gaming environment. With AI-powered gaming platforms, gamers can expect a more sophisticated and dynamic gaming experience. AI-powered games can adapt to players' skills and provide customized challenges that are tailored to their abilities.


Revolutionizing Game Design and Development: The Power of AI in the Gaming Industry

#artificialintelligence

AI-generated game worlds: AI algorithms can be used to procedurally generate game worlds, creating unique and dynamic environments for players to explore. This can help reduce the time and resources required for manual world-building and allow for more diverse and intricate game worlds. Adaptive game mechanics: AI can be used to create adaptive game mechanics that respond to player actions and decisions. This can lead to a more personalised and challenging gaming experience, with enemies that respond and adapt to player behaviour. Improved performance optimisation: AI can be used to optimise game performance, such as reducing load times, improving frame rates, and reducing resource usage.


The problem with AI and "content creation" tools

#artificialintelligence

Can you relate to this problem? I can't find anything to play on iOS right now. Now don't worry, mobile developers--it's not you, it's me. I don't go for puzzle games or narrative titles because I want something more mindless before I sleep, or while I'm on the bus. I just have a different problem: I've played you already.